-
Recommender Systems (RecSys) provide suggestions in many decision-making processes. Given that many real-world activities are performed by groups of people (e.g., a group of conference attendees looking for a place to dine), the need for group recommendations has increased. A wide range of Group Recommender Systems (GRecSys) has been developed to aggregate individual preferences into group preferences. We analyze 175 studies related to GRecSys. Previous works evaluate their systems using different types of groups (sizes and cohesiveness), and most of them test their systems on only one type of item, called Experience Goods (EG). As a consequence, it is hard to draw consistent conclusions about the performance of GRecSys. We present the aggregation strategies and aggregation functions that GRecSys commonly use to aggregate group members’ preferences. This study experimentally compares the performance (i.e., accuracy, ranking quality, and usefulness) of eight representative RecSys for group recommendations on ephemeral groups, using four metrics: Hit Ratio, Normalized Discounted Cumulative Gain, Diversity, and Coverage. Moreover, we use two different aggregation strategies, 10 different aggregation functions, and two different types of items on real-life datasets of both types (EG and Search Goods (SG)). The results show that the evaluation of GRecSys needs to use both EG and SG data, because the different characteristics of the datasets lead to different performance. GRecSys based on Singular Value Decomposition or Neural Collaborative Filtering work better than the others, and the Average aggregation function produces the best results.
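To illustrate the idea of aggregating group members’ preferences, the following is a minimal sketch of prediction-level aggregation. Only the Average function is named in the abstract; the Least Misery and Most Pleasure functions shown here are common examples from the GRecSys literature and are assumptions about which of the 10 functions were compared, not a reproduction of the study’s code.

```python
import numpy as np

def aggregate_predictions(member_scores, func="average"):
    """Aggregate per-member predicted item scores into group scores.

    member_scores: array of shape (n_members, n_items), one row of
    predicted ratings per group member.
    """
    if func == "average":        # Average: mean of member predictions
        return member_scores.mean(axis=0)
    if func == "least_misery":   # Least Misery: the unhappiest member decides
        return member_scores.min(axis=0)
    if func == "most_pleasure":  # Most Pleasure: the happiest member decides
        return member_scores.max(axis=0)
    raise ValueError(f"unknown aggregation function: {func}")

# Toy example: 3 group members, 4 candidate items
scores = np.array([[4.0, 2.0, 5.0, 3.0],
                   [3.5, 4.5, 2.0, 3.0],
                   [4.5, 3.0, 4.0, 1.0]])
group_scores = aggregate_predictions(scores, func="average")
ranking = np.argsort(-group_scores)  # rank items for the group
print(group_scores, ranking)
```

The same aggregation functions can instead be applied to member profiles before prediction; that is the other strategy family commonly contrasted in GRecSys evaluations.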
-
Most e-commerce websites (e.g., Amazon and TripAdvisor) show their users an initial set of useful product reviews. These reviews allow users to form a general idea about the product's characteristics. The usefulness of a review is mainly based on a score that the website users provide. Studies have shown that this score is not a good indicator of a review's actual helpfulness. Nonetheless, most past works still use it to classify a review as helpful or not. With the growing number of reviews, finding those helpful ones is a challenging task. In this work, we propose NovRev, a new unsupervised approach to recommend a personalized subset of unread useful reviews for those users looking to increase their knowledge about a product. NovRev considers an initial set of reviews as a context and recommends reviews that increase the product's information. We have extensively tested NovRev against five baseline methods, using eight real-life datasets from different product domains. The results show that NovRev can recommend novel, relevant, and diverse reviews while covering more information about the product.
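A rough sketch of the underlying idea of recommending reviews that add information beyond an already-shown context is given below. The greedy term-coverage heuristic here is an illustration only, not NovRev's actual scoring.

```python
def greedy_novel_reviews(context_reviews, candidate_reviews, k=3):
    """Pick k unread reviews whose vocabulary adds the most new terms
    beyond what the already-shown (context) reviews cover.
    Simplified stand-in for a novelty-driven review recommender.
    """
    covered = set()
    for review in context_reviews:
        covered |= set(review.lower().split())

    remaining = list(candidate_reviews)
    selected = []
    for _ in range(min(k, len(remaining))):
        # Score each candidate by how many not-yet-covered terms it introduces.
        best = max(remaining,
                   key=lambda r: len(set(r.lower().split()) - covered))
        selected.append(best)
        covered |= set(best.lower().split())
        remaining.remove(best)
    return selected

shown = ["Great battery life and solid build quality."]
unread = ["Battery lasts two days and charges fast.",
          "The camera struggles in low light.",
          "Customer support replaced my unit quickly."]
print(greedy_novel_reviews(shown, unread, k=2))
```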
-
An overall rating cannot reveal the details of a user's preferences toward each feature of a product. One widespread practice of e-commerce websites is to provide ratings on predefined aspects of the product along with user-generated reviews. Most recent multi-criteria works employ users' aspect preferences or user reviews to understand users' opinions and behavior. However, these works fail to learn how users correlate these information sources when expressing their opinion about an item. In this work, we present Multi-task & Multi-Criteria Review-based Rating (MMCRR), a framework to predict the overall ratings of items by learning how users represent their preferences when using multi-criteria ratings and text reviews. We conduct extensive experiments with three real-life datasets and six baseline models. The results show that MMCRR can reduce prediction errors while learning features better from the data.
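As a point of reference, a generic multi-task setup for this kind of problem jointly predicts the overall rating and the per-criterion (aspect) ratings from a shared representation. The sketch below assumes a pre-computed feature vector per user–item pair and is not the MMCRR architecture itself; the layer sizes, loss weight, and input features are placeholders.

```python
import torch
import torch.nn as nn

class MultiTaskRater(nn.Module):
    """Shared encoder feeding two heads: overall rating and aspect ratings."""
    def __init__(self, in_dim, hidden_dim, n_criteria):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.overall_head = nn.Linear(hidden_dim, 1)           # overall rating
        self.criteria_head = nn.Linear(hidden_dim, n_criteria)  # aspect ratings

    def forward(self, x):
        h = self.shared(x)
        return self.overall_head(h).squeeze(-1), self.criteria_head(h)

model = MultiTaskRater(in_dim=64, hidden_dim=32, n_criteria=4)
x = torch.randn(8, 64)            # e.g., pooled review-text + ID embeddings (placeholder)
y_overall = torch.rand(8) * 5
y_criteria = torch.rand(8, 4) * 5

pred_overall, pred_criteria = model(x)
# Joint loss: the auxiliary criteria task is weighted against the main task.
loss = nn.functional.mse_loss(pred_overall, y_overall) \
     + 0.5 * nn.functional.mse_loss(pred_criteria, y_criteria)
loss.backward()
```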